Fix #881 Snippets Unit Test Assertion Failure #892
Conversation
Codecov Report

Patch coverage:

@@           Coverage Diff           @@
##             main     #892   +/-  ##
=======================================
  Coverage   98.93%   98.93%
=======================================
  Files          84       84
  Lines       14236    14296   +60
=======================================
+ Hits        14084    14144   +60
  Misses        152      152

☔ View full report in Codecov by Sentry.
NimaSarajpoor left a comment:
@seanlaw
Do you think we can come up with a more elegant solution?
npt.assert_almost_equal(D_ref, D_comp)

def test_snippets():
Add a comment to show the purpose of this particular test function. For example, we may write:

# This test function raises an error due to a small, accumulated loss of precision in `snippet_profiles`.
# To avoid that, the code needs to be revised to reduce the loss of precision.
It feels like a hack and has a code smell. In this instance, how do we know which direction (AB vs BA) is suffering from a loss of precision?
I agree!

Need to dig deeper :) Will provide an update.
[update]

For the case added to the test function, the answer is: both! However, the loss of precision in one shifted the ref by +1e-14, and in the other it shifted the ref by -1e-14. Since the delta values are not identical, this changed the final outcome! I followed the code all the way and noticed this particular loss of precision is coming from the function

Note 1: I set
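The ±1e-14 situation described above is easy to reproduce in plain Python. The sketch below uses illustrative values only (not STUMPY's actual data): the same three numbers accumulated in two different orders land on different floats, and a delta of that size is enough to flip a comparison against a threshold.

```python
# Illustrative sketch: identical values, different accumulation order,
# different result -- enough to flip a downstream comparison.
vals = [0.1, 0.2, 0.3]

forward = sum(vals)             # ((0.1 + 0.2) + 0.3)
backward = sum(reversed(vals))  # ((0.3 + 0.2) + 0.1)

print(forward)              # 0.6000000000000001
print(backward)             # 0.6
print(forward == backward)  # False

# A threshold comparison can therefore flip depending on which
# accumulation order produced the value:
threshold = 0.6
print(forward <= threshold, backward <= threshold)  # False True
```

This is the same mechanism at work in the snippets test: two mathematically identical quantities computed along different code paths disagree in the last few bits.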
Force-pushed from b09dfe8 to df6a9b1
@seanlaw
@seanlaw As a follow-up to my previous comment:

Alternative to the last commit: with

The problem should be solved since, in this case, swapping the values

Additional note 1: I also tried
@NimaSarajpoor Can you tell me a set of values for

With those same values, are you able to check if

Can you please add this test case to
I think this is related to just Python (I ran my simple experiment outside of numba!)

How about this: Then, run the same thing but change the order of multiplication. Check

In case you just need one example:
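The experiment described here can be sketched as follows. The values `a`, `b`, `c` are hypothetical stand-ins (the thread's actual numbers are not shown in the page); the product is computed in several orders and compared against a near-exact `Decimal` reference. Whether two orders bit-match depends on the operands and platform, which is the point of the experiment, so only closeness to the reference is asserted.

```python
from decimal import Decimal

# Hypothetical inputs standing in for the quantities in the thread.
a, b, c = 0.1, 0.7, 9.3

x = (a * b) * c   # one evaluation order
y = (c * b) * a   # same product, different order
z = a * (b * c)   # same product, different grouping

# Near-exact reference: Decimal's default context keeps 28 significant
# digits, far more than a float64 carries.
ref = float(Decimal(a) * Decimal(b) * Decimal(c))

# The float results may or may not be bit-identical (that is exactly the
# order dependence under discussion), but each must sit within a few
# ULPs of the reference:
for v in (x, y, z):
    assert abs(v - ref) < 1e-12
print(x == y, x == z)
```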
Do you think I should still check this given that the issue is coming from Python?

Will do it 🙂
Thanks @NimaSarajpoor. It looks like this would make things more precise:

However, note that you'll have to cast it back to

What if we did:

I wonder if this would affect the performance since it is a deeply nested function that gets called a lot. I am reading some stories that
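A minimal sketch of the `Decimal` idea being discussed, under stated assumptions: the function name `precise_product` and its three-argument shape are hypothetical, since the actual expression from the code is not shown in the page. Note that `Decimal(x)` captures a float's exact stored binary value, and the result must be cast back to `float` for the rest of the pipeline.

```python
from decimal import Decimal

# Decimal(x) captures the float's exact binary value, so the conversion
# itself loses nothing:
assert Decimal(0.25) == Decimal('0.25')  # 0.25 is exactly representable
assert Decimal(0.1) != Decimal('0.1')    # 0.1 is not; Decimal exposes the true stored value

# Hypothetical helper in the spirit of the proposal: do the sensitive
# arithmetic in Decimal, then cast back to float.
def precise_product(a: float, b: float, c: float) -> float:
    return float(Decimal(a) * Decimal(b) * Decimal(c))
```

For a single IEEE multiplication the hardware result is already correctly rounded, so the payoff of the detour only shows up across longer chains of operations.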
Just to make sure we are on the same page: are we talking about comparing the performance of stump? (And maybe gpu_stump too?)
(New lesson for me: avoid hacks as much as possible! Now that I think more, and after reading your "Decimal" approach, I can see that swapping the variables is still like a hack! These hacks should be our last resort! Thank you for the lesson!!)
So, since the problem is really in

Does that make sense? Please make sure to turn
I'm glad that you understood. It is certainly easier to hack together a solution but it is another thing to:
@seanlaw
I will go and apply your proposal to the code for both

Sounds good!
[update] Apparently, numba does not support decimal types. See: this GitHub issue and this post in numba Discourse. [just for our future reference if needed]
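Since `Decimal` cannot be compiled by numba, one njit-compatible alternative worth knowing about is compensated (Kahan) summation, which uses only plain float arithmetic. This is a generic illustration of the technique, not the fix adopted in this PR.

```python
# Kahan (compensated) summation: plain float ops only, so -- unlike
# Decimal -- this is the kind of code numba's njit can compile.
def kahan_sum(values):
    total = 0.0
    c = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = total + y
        c = (t - total) - y  # recover the bits dropped by total + y
        total = t
    return total

naive = sum([0.1] * 10)            # 0.9999999999999999
compensated = kahan_sum([0.1] * 10)
print(naive, compensated)          # 0.9999999999999999 1.0
```

The compensation variable tracks the rounding error of each addition and feeds it back in, so the accumulated error stays bounded regardless of how many terms are summed.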
Thank you. I find it rather surprising that adding parentheses did not solve the problem. Also, can you please add the precision unit test above for
Yes, it is surprising indeed! Btw, I tried

In an isolated test, it seems that it can solve the precision issue. But, when I want to use it in an njit function, I get a numba error:
Will do.
So, when I do:

then the output is different. However, if I add parentheses:

the issue appears to be resolved. Similarly, adding parentheses here:

also produces identical results.
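That grouping alone can change the result is easy to demonstrate. Addition is used below because the effect is guaranteed for these particular literals; with multiplication, whether a given grouping changes the result depends on the operands, which is consistent with the thread's mixed CPU/GPU findings.

```python
# Same three literals, different grouping, different floats:
left  = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)

print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

Compilers and JITs (including numba and CUDA backends) are generally allowed to pick an evaluation order only within the parenthesization you wrote, which is why adding parentheses can pin down the result on one backend yet not on another.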
Force-pushed from 4dd570a to 2205916
@seanlaw Adding parentheses [for multiplication in GPU] did not resolve the assertion failure. But, when I re-order the multiplication, it resolves the loss-of-precision issue. Can you please try it out on your end as well?
Excellent. I think the GPU test looks great! Testing on Colab now |
Can you please push the latest commit into this repo? This way test.sh is the most up-to-date for testing the GPU on Colab (I don't think test_precision is executed for unit testing in this repo when I clone it on Colab)
Sure. [In the meantime, you may just use the flag
Right... I think we should be good. I noticed the assertion failure in the minimum-version testing is for:

And, it fails. I tried to change the version of Python and/or numpy to find which version preserves the symmetry property of the dot product, but couldn't find it as I still got the failure! [which is strange since when I checked it in Colab with Python

Is it possible that a specific version of numpy works with a specific version of Python, considering their patches?
It's certainly possible. However, you're only calling

I think this is worth posting as an issue on the
@NimaSarajpoor In case it might make our lives easier, I was thinking about bumping the minimum Python version, since Python 3.7 reached end of life about a month ago. Of course, it'll require some work to figure out what other corresponding minimum
I created a new conda env with just Python and numpy (so no numba), and got the assertion failure.

The assertion will not fail if I do

I tried to do:

And, I tried the case above and got the assertion failure. [It seems strange to me] [not sure if it is related or not]
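One detail worth keeping in mind while chasing the `np.dot(a, b)` vs `np.dot(b, a)` asymmetry: IEEE-754 multiplication is commutative bit-for-bit, so the elementwise products of the two calls are identical. Any difference therefore has to come from the order in which the underlying kernel accumulates those products (BLAS blocking/SIMD strategies may differ between the two argument orders). The pure-Python sketch below, with made-up values, shows how identical products plus different accumulation orders can yield different sums.

```python
# Elementwise products are bit-identical regardless of operand order:
a = [0.1, 0.2, 0.3]
b = [1.0, 1.0, 1.0]
prods_ab = [x * y for x, y in zip(a, b)]
prods_ba = [y * x for x, y in zip(a, b)]
assert prods_ab == prods_ba  # IEEE multiply is commutative

# ...so any dot-product asymmetry must come from accumulation order:
forward = 0.0
backward = 0.0
for p in prods_ab:
    forward += p
for p in reversed(prods_ab):
    backward += p
print(forward, backward, forward == backward)  # they differ for these values
```

This also explains why pinning the Python/numpy versions alone may not restore symmetry: the accumulation order lives inside the BLAS build that numpy links against, which can vary independently of the numpy version.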
Dropping support for 3.7 may not be a bad idea, as I noticed several packages did the same. I am still curious to understand why setting up my conda env with Python 3.8 and installing a numpy package does not resolve the assertion failure issue related to
@seanlaw
See issue #897
@NimaSarajpoor What do you think? Is this ready to be merged?
I think it is ready 👍
Btw, one of the new test functions has no comment. Is that okay?
[update]
I took care of it. Ready for review and merge [if nothing else needs to be done here]
Thanks for working on this @NimaSarajpoor!

Fix #881